Teaching Kids to Fact-Check AI: The 'How Do You Know?' Habit
7 min read
Four words. That's the whole thing.
"How do you know?"
Ask that question consistently enough, at dinner, during homework, when your kid repeats something they heard from a friend or read online or got from an AI, and you'll build something that no school curriculum has figured out how to reliably teach: the automatic reflex to fact-check AI outputs before accepting them.
That reflex is, right now, one of the most valuable things a young person can have. Not because AI is dangerous, but because AI is powerful, ubiquitous, and occasionally wrong in ways that aren't obvious. The people who thrive with it are the ones who know how to check its work.
Your kid can be one of those people.
Why This Question Works
"How do you know?" does something that most information literacy instruction doesn't: it puts the child in the evaluator's seat rather than the receiver's seat.
When you ask a child to verify a claim, you're not telling them the claim is wrong. You're not expressing skepticism about their judgment. You're asking them to show you the path from "I heard this" to "I know this is true."
That's a fundamentally different cognitive posture than accepting information and moving on. And it's the posture that matters when they're working with AI.
The research on why this works points to something called metacognitive oversight, the ability to monitor your own thinking process, evaluate the quality of your reasoning, and adjust when something doesn't hold up. A 2025 study from the University of Michigan examined how students used an AI-assisted hint system in a programming course and found a consistent link between requesting planning hints before diving into a problem and achieving higher grades. Students who paused to plan, rather than jumping straight to debugging after the fact, performed better across varying levels of problem difficulty. The mechanism is the same one "How do you know?" is designed to trigger: a brief, intentional pause before accepting what's in front of you.
"How do you know?" is a metacognitive prompt. It interrupts the reactive loop (receive, accept, move on) and inserts a moment of evaluation. Do that enough times, and the interruption becomes automatic. The child starts asking themselves before you ask them.
That's the habit. That's what you're building.
The Three Levels of the Question
"How do you know?" sounds simple, but it actually has three distinct depths depending on how far you push it. Knowing which level to use makes the habit stick without turning every conversation into an interrogation.
Level 1: Where did you hear that? This is the entry point. You're just asking for a source. "I read it online." "ChatGPT told me." "My friend said." "It was in my textbook." This level teaches source awareness, the recognition that all information comes from somewhere, and the somewhere matters.
Use this level with younger kids, or as the opening move with any age. It's low pressure and high frequency. The goal is just to make source-tracking automatic.
Level 2: Is that a reliable source? This is where it gets interesting. Not all sources are equal, and kids can learn to evaluate them earlier than most parents expect. A textbook from 2015 may be outdated on a rapidly changing topic. An AI response doesn't have a source at all; it synthesizes patterns from training data, which is a very different thing from citing evidence. A news article can be accurate or inaccurate depending on who wrote it and when.
This level works well with kids 10 and older who've been doing Level 1 for a while. The question shifts from "where did you hear it" to "how trustworthy is that source, and why?"
Level 3: How would you verify it independently? This is the full version. Not just where did you hear it, not just whether that source is generally reliable, but what independent check could you run to confirm it? A second source. A primary document. Your own calculation. A quick experiment.
This is the level that transfers directly to AI use. When an AI produces a claim, Level 3 becomes: where would I look to confirm this is actually true? That's the question that catches hallucinations before they make it into a paper, a presentation, or a decision.
You don't need to hit Level 3 every time. Level 1 every day builds more than Level 3 once a month.
Making It Feel Natural, Not Adversarial
The biggest risk with "how do you know?" is tone. Asked with skepticism or impatience, it sounds like an accusation. Asked with genuine curiosity, it sounds like an invitation.
The difference is framing. "How do you know that?" with a raised eyebrow puts a kid on the defensive. "Oh, interesting, how do you know?" treats the question as part of the conversation rather than a challenge to their credibility.
A few things that help:
Ask it about yourself too. When you share something you heard, beat them to it: "I read that the average American spends four hours a day on their phone. I should probably verify that before I repeat it." Modeling the habit normalizes it. Kids don't feel singled out when verification is something everyone does.
Make it a game when possible. The Spot the Lie activity we described in an earlier post is essentially Level 1 and Level 2 in game form. When verification feels like play rather than homework, it builds faster.
Celebrate the catch. When your child spots an error (in an AI response, in something you said, in a news headline), make it a moment. "You checked that? Good catch." The satisfaction of being right about something being wrong is one of the strongest reinforcers for this habit.
Start with topics they care about. Sports statistics, animal facts, historical claims about a period they're studying. Whatever your child is interested in, that's where the habit takes hold fastest. Intrinsic motivation makes verification feel worthwhile rather than obligatory.
How This Applies Directly to AI
Here's the practical bridge to AI use, which you can make explicit or let happen naturally depending on your child's age and comfort level.
When your child uses AI for anything, a research question, a homework problem, a factual claim they want to settle, the "how do you know?" habit applies directly. It applies more urgently, actually, because AI produces output that sounds authoritative and is occasionally wrong in ways that aren't obvious from reading it.
A RAND survey from 2025 found that over 80% of middle and high school students reported their teachers had never explicitly taught them how to evaluate AI outputs. That's not a knock on teachers; the guidance simply hasn't existed. But it means the children who do know how to evaluate AI have a genuine advantage, in school, in work, in any context where they're using AI as a tool rather than a crutch.
The three-level question maps cleanly to AI use:
Where did this come from? AI produced this, which means it synthesized patterns, not evidence.
Is that reliable? For what kinds of questions? AI is more reliable on well-established, frequently documented facts than on recent events, obscure specifics, or anything requiring up-to-date information.
How would I verify this independently? That's the check. That's the moment where the habit converts from knowing AI can be wrong to actually catching it when it is.
The child who does that check automatically isn't just a more careful student. They're building the skill that, according to PwC's analysis of close to a billion job postings, carries a 56% wage premium for workers with AI skills over those without. The premium comes from being able to work with AI productively, which requires being able to evaluate it critically.
A Simple Starting Routine
If you want to build this habit deliberately, here's a low-commitment structure that works:
Once a day, find one moment to ask "how do you know?" It doesn't need to be about AI. It doesn't need to be high-stakes. Dinner conversation, a claim from a TV show, something they read for school. Just once, consistently.
Once a week, apply it directly to an AI interaction. Pick something your child asked an AI, or ask something together, and walk through the three levels. Where did this come from? How reliable is that? How would we check?
Two touchpoints. The daily one builds the reflex. The weekly one builds the AI-specific application.
Within a few weeks you'll start hearing your child ask the question unprompted, about AI, about things they read, about things other people tell them. That's when you know the habit has taken root.
What You're Really Building
Verification isn't just an AI skill. It's an information skill, a reasoning skill, a life skill. The child who habitually asks "how do you know?" before accepting a claim is better positioned in every domain that involves information, which is every domain.
AI makes this skill more urgent because AI makes unverified information more accessible, more convincing-sounding, and more consequential. But the habit itself is older than AI, and it transfers everywhere.
Four words. Tonight at dinner. That's where it starts.
If you want to go deeper on the research behind why verification habits matter more than AI usage skills, and get notified when we open access to the full AI literacy curriculum we're building (launching August 2026), the waitlist is below. We send research, practical activities, and curriculum previews.
Sources cited in this post:
- Phung, T. et al. (2025). "Plan More, Debug Less: Applying Metacognitive Theory to AI-Assisted Programming Education." AIED 2025, pp. 3–17. https://arxiv.org/abs/2509.03171
- Doss, C. et al. (2025). "AI Use in Schools Is Quickly Increasing but Guidance Lags Behind." RAND Corporation. https://doi.org/10.7249/RRA4180-1
- PwC. (2025). "The Fearless Future: 2025 Global AI Jobs Barometer." https://www.pwc.com/gx/en/issues/artificial-intelligence/job-barometer/2025/report.pdf
Originally published on Hashnode